Audio and visual data analysis tasks usually have to deal with high-dimensional and non-negative signals. However, most data analysis methods suffer from overfitting and numerical problems when the data are analyzed without an adequate dimensionality-reduction preprocessing step. Moreover, interpretability of how and why the filters work in audio or visual applications is a desired property, especially when energy or spectral signals are involved; in these cases, owing to the nature of such signals, non-negativity of the filter weights is desirable in order to better understand how they work. Because of these two requirements, we propose different methods to reduce the dimensionality of the data while guaranteeing the non-negativity and interpretability of the solution. In particular, we propose a generalized methodology to design filter banks in a supervised way for applications dealing with non-negative data, and we explore different ways of solving the proposed objective function, including a non-negative variant of partial least squares. We analyze the discriminative power of the features obtained with the proposed methods on two different and widely studied applications: texture and music genre classification. Furthermore, we compare the filter banks obtained with our methods against other state-of-the-art approaches specifically designed for feature extraction.
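A minimal sketch of the kind of constrained fit this relies on, using SciPy's non-negative least squares solver; the toy spectra, targets, and single-filter setup are illustrative assumptions, not the paper's actual objective (which couples a non-negative PLS-style projection with filter-bank design).

```python
# Sketch: learn a single non-negative spectral filter w >= 0 that maps
# non-negative (e.g. spectral energy) inputs X to a supervised target y.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_samples, n_bins = 200, 64          # illustrative sizes
X = rng.random((n_samples, n_bins))  # stand-in for non-negative spectra
w_true = np.clip(rng.normal(size=n_bins), 0, None)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)  # noisy supervised target

w_hat, residual = nnls(X, y)         # w_hat >= 0 by construction
print(w_hat.min() >= 0, residual)    # non-negativity keeps the filter interpretable
```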
Models of sensory processing and learning in the cortex need to efficiently assign credit to synapses in all areas. In deep learning, a known solution is error backpropagation, which however requires biologically implausible weight transport from feed-forward to feedback paths. We introduce Phaseless Alignment Learning (PAL), a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies. This is achieved by exploiting the noise naturally found in biophysical systems as an additional carrier of information. In our dynamical system, all weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses. Our method is completely phase-free (no forward and backward passes or phased learning) and allows for efficient error propagation across multi-layer cortical hierarchies, while maintaining biologically plausible signal transport and learning. Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment: compared to random synaptic feedback, it can solve complex tasks with fewer neurons and learn more useful latent representations. We demonstrate this on various classification tasks using a cortical microcircuit model with prospective coding.
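PAL itself involves continuous-time dynamics and noise-driven feedback learning that do not reduce to a few lines; as a hedged point of reference, the sketch below implements the random-feedback-alignment baseline the abstract contrasts against, where errors are sent backwards through a fixed random matrix B instead of the transposed forward weights. The toy task and network sizes are assumptions.

```python
# Sketch: feedback alignment on a tiny two-layer network (the baseline PAL improves on).
# Errors propagate backwards through a fixed random matrix B, not through W2.T.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
y = (X[:, :1] > 0).astype(float)            # toy binary target

W1 = rng.normal(scale=0.1, size=(10, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))
B = rng.normal(scale=0.1, size=(1, 32))     # fixed random feedback, never learned
lr = 0.1

for _ in range(500):
    h = np.tanh(X @ W1)                     # hidden activity
    out = 1 / (1 + np.exp(-h @ W2))         # sigmoid readout
    err = out - y                           # output error
    dh = (err @ B) * (1 - h ** 2)           # feedback via B instead of W2.T
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ dh / len(X)

print("final squared error:", float(np.mean((out - y) ** 2)))
```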
The Predicting Media Memorability task in the MediaEval evaluation campaign has been running annually since 2018, and several different tasks and data sets have been used over that time. This has allowed us to compare the performance of many memorability prediction techniques on the same data and in a reproducible way, and to refine and improve those techniques. The resources created to compute media memorability are now being used by researchers well beyond the actual evaluation campaign. In this paper we present a summary of the task, including the collective lessons we have learned for the research community.
Over the last years, topic modeling has emerged as a powerful technique for organizing and summarizing big collections of documents or searching for particular patterns in them. However, privacy concerns arise when cross-analyzing data from different sources is required. Federated topic modeling solves this issue by allowing multiple parties to jointly train a topic model without sharing their data. While several federated approximations of classical topic models do exist, no research has been carried out on their application to neural topic models. To fill this gap, we propose and analyze a federated implementation based on state-of-the-art neural topic modeling implementations, showing its benefits when there is a diversity of topics across the nodes' documents and the need to build a joint model. By construction, our approach is equivalent to a centralized approach both in theory and in practice, while preserving the privacy of the nodes.
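As a rough, hedged sketch of the generic federated pattern involved (not the authors' actual protocol or their neural topic model), the snippet below performs one round of size-weighted parameter averaging across nodes that never exchange raw documents; the local update, parameter shapes, and node sizes are illustrative placeholders.

```python
# Sketch: one round of federated averaging over per-node model parameters.
# Each node trains locally on its own documents; only parameter vectors are shared.
import numpy as np

def local_update(params, local_gradient, lr=0.05):
    """Stand-in for a node's local training step on its private documents."""
    return params - lr * local_gradient

global_params = np.zeros(100)                       # e.g. flattened topic-model weights
node_gradients = [np.random.default_rng(i).normal(size=100) for i in range(4)]
node_sizes = np.array([120, 300, 80, 500])          # documents per node (illustrative)

local_params = [local_update(global_params, g) for g in node_gradients]
weights = node_sizes / node_sizes.sum()
global_params = sum(w * p for w, p in zip(weights, local_params))  # size-weighted average
```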
This paper presents an automatic approach to creating taxonomies of technical terms based on the Cooperative Patent Classification (CPC). The resulting taxonomy contains about 170k nodes in 9 separate technological branches and is freely available. We also show that a Text-to-Text Transfer Transformer (T5) model can be fine-tuned to generate hypernyms and hyponyms with relatively high precision, confirming the manually assessed quality of the resource. The T5 model opens the taxonomy to any new technological terms for which a hypernym can be generated, thus making the resource updateable with new terms, an essential feature for the constantly evolving field of technological terminology.
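As a hedged illustration of the inference side, the snippet below queries a fine-tuned T5 checkpoint for a hypernym using Hugging Face transformers; the checkpoint name and the "hypernym:" prompt format are assumptions, since the abstract does not specify them.

```python
# Sketch: generating a hypernym for a technical term with a fine-tuned T5 model.
# "your-finetuned-t5-checkpoint" and the prompt format are placeholders/assumptions.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "your-finetuned-t5-checkpoint"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("hypernym: lithium-ion battery", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```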
A significant level of stigma and inequality exists in mental healthcare, especially in under-served populations, and it propagates through the data collected from them. When not properly accounted for, machine learning (ML) models learned from data can reinforce the structural biases already present in society. Here, we present a systematic study of bias in ML models designed to predict depression in four different case studies covering different countries and populations. We find that standard ML approaches regularly exhibit biased behavior. However, we show that standard mitigation techniques, and our own post-hoc method, can be effective in reducing the level of unfair bias. We provide practical recommendations to develop ML models for depression risk prediction with increased fairness and trust in the real world. No single best ML model for depression prediction provides equality of outcomes. This emphasizes the importance of analyzing fairness during model selection and transparent reporting about the impact of debiasing interventions.
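One concrete way to surface this kind of bias is to compare prediction and error rates across a sensitive attribute after training; the sketch below computes a demographic-parity gap and an equal-opportunity gap. The threshold, group encoding, and synthetic data are illustrative assumptions, not the paper's exact metrics or post-hoc method.

```python
# Sketch: post-hoc group-fairness check for a binary depression-risk classifier.
# y_true are labels, y_score are model scores, `group` is a sensitive attribute.
import numpy as np

def fairness_gaps(y_true, y_score, group, threshold=0.5):
    y_pred = (y_score >= threshold).astype(int)
    rates, tprs = [], []
    for g in np.unique(group):
        mask = group == g
        rates.append(y_pred[mask].mean())                         # positive prediction rate
        pos = mask & (y_true == 1)
        tprs.append(y_pred[pos].mean() if pos.any() else np.nan)  # true positive rate
    return {
        "demographic_parity_gap": max(rates) - min(rates),
        "equal_opportunity_gap": np.nanmax(tprs) - np.nanmin(tprs),
    }

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
group = rng.integers(0, 2, 500)
y_score = rng.random(500) + 0.05 * group          # inject a small group skew
print(fairness_gaps(y_true, y_score, group))
```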
In this work, we propose a framework relying solely on chat-based customer support (CS) interactions for predicting the recommendation decision of individual users. For our case study, we analyzed a total of 16.4k users and 48.7k customer support conversations within the financial vertical of a large e-commerce company in Latin America. Our main contribution is to use Natural Language Processing (NLP) to assess and predict recommendation behavior, where, in addition to static sentiment analysis, we exploit the predictive power of each user's sentiment dynamics. Our results show that, with interpretable features, it is possible to predict the likelihood that a user will recommend a product or service based solely on the message-wise sentiment evolution of their CS conversations, in a fully automated way.
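A hedged sketch of what "sentiment dynamics" features could look like in practice: summarize the per-message sentiment trajectory of each conversation (level, volatility, trend, endpoint shift) and feed it to a simple classifier. The feature set, toy data, and model choice are assumptions, not the paper's pipeline.

```python
# Sketch: turning message-wise sentiment scores into dynamics features
# and predicting whether the user would recommend the service.
import numpy as np
from sklearn.linear_model import LogisticRegression

def dynamics_features(sentiments):
    """sentiments: per-message scores in [-1, 1], in chronological order."""
    s = np.asarray(sentiments, dtype=float)
    t = np.arange(len(s))
    slope = np.polyfit(t, s, 1)[0] if len(s) > 1 else 0.0   # overall trend
    return [s.mean(), s.std(), slope, s[-1] - s[0]]         # level, volatility, trend, shift

conversations = [[-0.2, 0.1, 0.6], [0.3, -0.4, -0.7, -0.5], [0.0, 0.2, 0.4, 0.5]]
labels = [1, 0, 1]                                          # would-recommend (illustrative)

X = np.array([dynamics_features(c) for c in conversations])
clf = LogisticRegression().fit(X, labels)
print(clf.predict_proba(X)[:, 1])
```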
In this paper we present a novel multi-attribute face manipulation method based on textual descriptions. Previous text-based image editing methods either require test-time optimization for each individual image or are restricted to single-attribute editing. Extending these methods to multi-attribute face image editing scenarios introduces undesired, excessive attribute changes, e.g., text-relevant attributes are overly manipulated and text-irrelevant attributes are also changed. In order to address these challenges and achieve natural editing over multiple face attributes, we propose a new decoupling training scheme where we use group sampling to get text segments from the same attribute categories, instead of whole complex sentences. Further, to preserve other existing face attributes, we encourage the model to edit the latent code of each attribute separately via an entropy constraint. During the inference phase, our model is able to edit new face images without any test-time optimization, even from complex textual prompts. We show extensive experiments and analysis to demonstrate the efficacy of our method, which generates natural manipulated faces with minimal text-irrelevant attribute editing. Code and pre-trained model will be released.
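As one hedged reading of the entropy constraint (the abstract does not spell out its exact form), edits can be encouraged to concentrate on a single attribute by penalizing a high-entropy distribution of edit magnitudes across per-attribute latent directions; the sketch below is an interpretation for illustration, not the paper's loss.

```python
# Sketch: an entropy penalty over per-attribute latent edits, encouraging the model
# to concentrate changes on one attribute (an interpretation of the abstract).
import numpy as np

def edit_entropy_penalty(latent_deltas, eps=1e-8):
    """latent_deltas: array of shape (n_attributes, latent_dim) of per-attribute edits."""
    magnitudes = np.linalg.norm(latent_deltas, axis=1)      # edit strength per attribute
    p = magnitudes / (magnitudes.sum() + eps)               # normalized distribution
    return -np.sum(p * np.log(p + eps))                     # low entropy = one dominant edit

deltas = np.array([[0.9, 0.1, 0.0], [0.01, 0.0, 0.02], [0.0, 0.01, 0.0]])
print(edit_entropy_penalty(deltas))  # small value: edits concentrated on attribute 0
```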
Synthetic data generated by generative models can augment the performance and capabilities of data-hungry deep learning models in medical imaging. However, (1) the availability of (synthetic) datasets is limited and (2) training generative models is complex, which hinders their adoption in research and clinical applications. To lower this entry barrier, we propose medigan, a one-stop shop for pretrained generative models, implemented as an open-source, framework-agnostic Python library. medigan allows researchers and developers to create, augment, and domain-adapt their training data in just a few lines of code. Guided by design decisions based on gathered end-user requirements, we implement medigan around modular components for generative model (i) execution, (ii) visualization, (iii) search and ranking, and (iv) contribution. The library's scalability and design are demonstrated by its growing number of integrated and readily usable pretrained generative models, comprising 21 models using 9 different generative adversarial network architectures trained on 11 datasets from 4 domains, namely mammography, endoscopy, X-ray, and MRI. Furthermore, 3 applications of medigan are analyzed in this work, including (a) enabling community-wide sharing of restricted data, (b) investigating generative model evaluation metrics, and (c) improving clinical downstream tasks. In (b), extending common medical image synthesis assessment and reporting standards, we show Fréchet Inception Distance variability based on image normalization and radiology-specific feature extraction.
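For orientation, a minimal usage sketch with the medigan library follows; the `Generators` interface, the `generate` call, and the specific model_id are recalled from the project's public examples rather than taken from this abstract, so they should be checked against the current release.

```python
# Sketch: sampling synthetic medical images from a pretrained medigan model.
# The model_id below is an illustrative example and may differ in the installed version.
from medigan import Generators

generators = Generators()
generators.generate(model_id="00001_DCGAN_MMG_CALC_ROI", num_samples=8)
```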
In the (special) smoothing spline problem, one considers a variational problem with a quadratic data-fidelity penalty and Laplacian regularization. Higher-order regularity can be obtained by replacing the Laplacian regularizer with a poly-Laplacian regularizer. The method adapts readily to graphs, and here we consider graph poly-Laplacian regularization in a fully supervised, non-parametric, noise-corrupted regression problem. In particular, given a dataset $\{x_i\}_{i=1}^n$ and a set of noisy labels $\{y_i\}_{i=1}^n \subset \mathbb{R}$, we let $u_n:\{x_i\}_{i=1}^n \to \mathbb{R}$ be the minimizer of an energy consisting of a data-fidelity term and an appropriately scaled graph poly-Laplacian term. When $y_i = g(x_i) + \xi_i$, for i.i.d. noise $\xi_i$, and using a geometric random graph, we identify (with high probability) the rate of convergence of $u_n$ to $g$ in the large-data limit $n \to \infty$. Furthermore, our rate coincides, up to logarithmic factors, with the known rate of convergence in the usual smoothing spline model.
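For concreteness, a plausible form of the minimized energy is shown below; the abstract only says "appropriately scaled", so the weight $\tau_n$ and the inner-product form of the regularizer are assumptions for illustration.

```latex
% Sketch of the graph poly-Laplacian regression energy; \tau_n and the inner-product
% form of the regularizer are assumptions, and s denotes the poly-Laplacian order.
E_n(u) = \frac{1}{n}\sum_{i=1}^{n}\bigl(u(x_i)-y_i\bigr)^2
         + \tau_n \,\langle u,\, \Delta_n^{s} u\rangle,
\qquad
u_n = \operatorname*{arg\,min}_{u:\{x_i\}_{i=1}^n \to \mathbb{R}} E_n(u),
```

where $\Delta_n$ is the graph Laplacian built on $\{x_i\}_{i=1}^n$.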